Rewrite on welcome to lf section #2872

annabellscha merged 4 commits into langfuse-academy-research from
Conversation
@claude review
> Across the loop, teams are usually balancing three things at once: output quality, latency, and cost. The point is not to optimize one number in isolation, but to make tradeoffs explicit and grounded in evidence from your own application.
> ## Where the docs fit
>
> That changes what teams need to understand and manage. It is no longer enough to ask whether the system ran. You also need a way to reason about output quality, iteration, and the tradeoffs that come with shipping AI products.

> ## Why LLM observability is different
>
> Langfuse Academy exists to help you build that mental model. It maps the AI engineering lifecycle step by step so you can understand how the pieces fit together and what it takes to move from prototype to production.

Swap "mental model" with "understanding".
> Building with LLMs changes the job of engineering teams. Once outputs become probabilistic, a system can be technically healthy and still produce responses that are wrong, incomplete, off-brand, unsafe, or simply not useful.

> Rather than focusing on individual product features, Academy is meant to help you understand the bigger picture, and how teams can work with that change in a systematic way.
>
> That changes what teams need to understand and manage. It is no longer enough to ask whether the system ran. You also need a way to reason about output quality, iteration, and the tradeoffs that come with shipping AI products.

Change this to: "output quality, cost, latency, and the trade-offs that come with shipping AI products."
> The loop is a working model, not a strict waterfall. Teams move through it repeatedly, and different parts of the loop become more important as a product matures.

> ## The steps

Generally, let's try to make all steps actionable, so:

- Tracing
- Monitoring
- Building datasets
- Experimenting
- Evaluating

Something like that, so they all take the same word form.

Make sure to propagate this.
> ## Why we are publishing this

> But LLM applications introduce a different kind of challenge. Their behavior is probabilistic: the same input can produce different outputs, and a response can look plausible even when it is wrong, incomplete, off-brand, unsafe, or simply unhelpful. In other words, a request can succeed technically and still fail for the user.
>
> Langfuse is open source, and we want to open source the conceptual side of AI engineering too. The Academy is our attempt to make the mental models, vocabulary, and workflows behind LLM application development easier to access for everyone.

Change to: "… is our way of making the mental models, vocabulary, and workflows behind LLM application development easier to access for everyone."
> - Software engineers moving into AI product development
> - Product managers who need to reason about quality, iteration, and tradeoffs
> - People learning the field and trying to understand the core concepts
> - Technical and business leaders who need a working model of how AI systems are built and improved

Add: agents that help humans figure out LLM engineering.
| "title": "Academy", | ||
| "pages": [ | ||
| "index", | ||
| "ai-engineering-loop", |
There was a problem hiding this comment.
Not sure if it's here, but it's weird that the AI engineering loop is nested in the AI engineering loop. It should just be one tab that I click on, and then I get the content; nothing to unfold.
> ## The steps

> ### 1. Tracing

While I like the content, it's visually difficult because there is so much text. Can we either bullet-point it or shorten it without losing a lot of content? I think it gives the high-level overview.

annabellscha left a comment:

Please implement all comments.
> Tracing captures the full path of a request so you can inspect prompts, retrieved context, tool calls, outputs, latency, and cost in one place. Read [Tracing](/academy/tracing) for a breakdown of what a useful trace looks like and why traces become the foundation for everything else.
>
> ### 2. Monitoring
>
> Monitoring turns raw traces into ongoing visibility by tracking trends and surfacing the cases that deserve attention. Read [Monitoring](/academy/monitoring) to understand how teams watch quality, cost, latency, and production failures over time.
>
> ### 3. Datasets
>
> Datasets turn real scenarios into repeatable test cases so you can check whether a change helps across more than a handful of examples. Read [Datasets](/academy/datasets) for how to structure dataset items and when it makes sense to split or grow a dataset.
>
> ### 4. Experiments
>
> Experiments let you change one variable at a time and compare outputs against a stable baseline instead of relying on intuition alone. Read [Experiments](/academy/experiments) to see how to isolate variables, compare variants, and learn what actually improved.
>
> ### 5. Evaluation
>
> Evaluation is how you decide whether results are good enough to ship, using manual review, code-based checks, or LLM judges depending on the task. Read [Evaluate](/academy/evaluate) for how teams score outputs and turn qualitative judgments into a repeatable process.
🔴 All primary CTA links in the new ai-engineering-loop/overview.mdx and the rewritten index.mdx will 404. Each academy subsection contains only overview.mdx (no index.mdx), so under fumadocs these resolve at /academy/<section>/overview, not /academy/<section> — and unlike the docs section, no equivalent redirects exist in lib/redirects.js. Fix by either appending /overview to each link, or adding /academy/* entries to lib/redirects.js:60-71 analogous to the existing docs redirects.
Extended reasoning...
What's broken
Every primary call-to-action link added by this PR points to a bare `/academy/<section>` URL, but those URLs do not resolve. The affected links are:

`content/academy/index.mdx`:
- Line 30: `[The AI Engineering Loop](/academy/ai-engineering-loop)`
- Lines 32–36: `/academy/tracing`, `/academy/monitoring`, `/academy/datasets`, `/academy/experiments`, `/academy/evaluate`
`content/academy/ai-engineering-loop/overview.mdx`:
- Line 18: `[Tracing](/academy/tracing)`
- Line 22: `[Monitoring](/academy/monitoring)`
- Line 26: `[Datasets](/academy/datasets)`
- Line 30: `[Experiments](/academy/experiments)`
- Line 34: `[Evaluate](/academy/evaluate)`
Why they 404
I verified directly that each academy subfolder (`ai-engineering-loop`, `tracing`, `monitoring`, `datasets`, `experiments`, `evaluate`) contains only `meta.json` + `overview.mdx` — no `index.mdx`. Under fumadocs-core's default source loader, a file at `content/academy/tracing/overview.mdx` is exposed at URL `/academy/tracing/overview`. There is no automatic "use overview as folder index" fallback — only an `index.mdx` in a folder takes the bare folder URL. So `academySource.getPage(["tracing"])` returns `undefined`, and `app/academy/[[...slug]]/page.tsx:14` falls through to `notFound()`, rendering `AcademyNotFound`.
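The folder-to-slug behavior described here can be sketched in isolation. Note that `fileToSlug` and `getPage` below are hypothetical helpers written for illustration; fumadocs' real loader is more involved:

```javascript
// Toy model of the slug mapping: a file's URL segments are its path
// relative to the content root with the .mdx extension stripped.
// (Hypothetical helpers, not fumadocs' actual API.)
function fileToSlug(relativePath) {
  return relativePath.replace(/\.mdx$/, "").split("/");
}

// Lookup over the real file layout: only ["tracing", "overview"] exists,
// so a request for ["tracing"] alone finds no page and falls through to 404.
const pages = ["tracing/overview.mdx"].map(fileToSlug);
const getPage = (slug) =>
  pages.find((p) => p.length === slug.length && p.every((s, i) => s === slug[i]));

console.log(fileToSlug("tracing/overview.mdx")); // [ 'tracing', 'overview' ]
console.log(getPage(["tracing"])); // undefined -> notFound()
```

This is why the bare folder URL has no page to resolve to unless an `index.mdx` exists at that level.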
The repo's own precedent confirms this
`lib/redirects.js:60-71` explicitly maps these docs paths to their `/overview` children:

```js
...[
  "/docs/integrations",
  "/docs/scores",
  "/docs/datasets",
  "/docs/security",
  "/docs/observability",
  "/docs/evaluation",
  "/docs/metrics",
  "/docs/api-and-data-platform",
  "/docs/prompt-management",
].map((path) => [path, path + "/overview"]),
```

These redirects exist precisely because those docs sections have the same structure (only `overview.mdx`, no `index.mdx`) and the bare folder URLs do not resolve on their own. Grepping `lib/redirects.js` for "academy" returns only an unrelated `/customers/khan-academy` entry — no `/academy/*` redirect was added by this PR.
Step-by-step proof
1. User lands on `/academy` (works — `content/academy/index.mdx` exists at the root, mapped to `/academy`).
2. User clicks the first CTA, `[Tracing](/academy/tracing)`, on line 32 of the new `index.mdx` or line 18 of `ai-engineering-loop/overview.mdx`.
3. Next.js routes the request to `app/academy/[[...slug]]/page.tsx` with `slug = ["tracing"]`.
4. `academySource.getPage(["tracing"])` looks for a page whose URL path segments are `["tracing"]`. The only file under `content/academy/tracing/` is `overview.mdx`, which fumadocs maps to slug `["tracing", "overview"]` — no match.
5. Result is `undefined`, so `if (!page) notFound();` triggers.
6. The user sees the `AcademyNotFound` 404 page.

The same chain applies to all six bare-folder URLs introduced by this PR. The PR landing page itself loads, but the moment a reader clicks any "next step" link, the journey breaks.
Impact
Every primary CTA in the two pages this PR adds/rewrites is broken. This is a release blocker for the Academy section: the rewritten welcome page exists specifically to send readers to the loop sub-sections, and the new ai-engineering-loop/overview.mdx is structured around five 'Read [section]' links that all 404. Greptile flagged a similar broken link to /academy/error-analysis in the pre-existing monitoring page but missed this systemic issue across the new content.
Fix
Two equivalent options:
1. Append `/overview` at each link site — change `/academy/tracing` to `/academy/tracing/overview`, etc., in both files (matches what `lib/nav-links.tsx` already does for the docs section).
2. Add redirects to `lib/redirects.js` analogous to lines 60–71:

```js
...[
  "/academy/ai-engineering-loop",
  "/academy/tracing",
  "/academy/monitoring",
  "/academy/datasets",
  "/academy/experiments",
  "/academy/evaluate",
].map((path) => [path, path + "/overview"]),
```

Option 2 is more robust because it also catches any future external links that target the bare folder URL.
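To sanity-check the redirect shape, here is a runnable sketch. The tuple-producing `.map` mirrors the quoted `lib/redirects.js` snippet; the translation into `{source, destination, permanent}` objects is an assumption about how the repo feeds these tuples into Next.js, not verified against its code:

```javascript
// The [source, destination] tuple shape mirrors the lib/redirects.js
// snippet quoted in the review.
const academyPaths = [
  "/academy/tracing",
  "/academy/monitoring",
  "/academy/datasets",
  "/academy/experiments",
  "/academy/evaluate",
];

const pairs = academyPaths.map((path) => [path, path + "/overview"]);

// One possible translation into the object form next.config.js expects
// (assumed; the repo may consume the tuples differently).
const redirects = pairs.map(([source, destination]) => ({
  source,
  destination,
  permanent: true,
}));

console.log(redirects[0].destination); // "/academy/tracing/overview"
```

Each bare section URL then resolves to its `/overview` child regardless of where the inbound link originates.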
Partially addressed. Moving `ai-engineering-loop` to a top-level `.mdx` did fix the `/academy/ai-engineering-loop` link, but the other five CTAs still 404:

- `content/academy/index.mdx:20-24`: `/academy/tracing`, `/academy/monitoring`, `/academy/datasets`, `/academy/experiments`, `/academy/evaluate`
- `content/academy/ai-engineering-loop.mdx:25,33,48,56,64,86`: same five plus the "Start with Tracing" CTA

Each of these subfolders still only contains `overview.mdx`, and no `/academy/*` entries were added to `lib/redirects.js` (only the unrelated `/customers/khan-academy` mapping exists). Either append `/overview` at every link site, or add the same kind of redirect block as `lib/redirects.js:60-71`:

```js
...[
  "/academy/tracing",
  "/academy/monitoring",
  "/academy/datasets",
  "/academy/experiments",
  "/academy/evaluate",
].map((path) => [path, path + "/overview"]),
```

Merged commit 66bf0c9 into langfuse-academy-research
> <Cards num={3} className="gap-6">
>   <Cards.Card
>     title="3. Building datasets"
🟡 On line 47, the card title reads `title="3. Building datasets"` (lowercase d), but the corresponding sidebar label in `content/academy/datasets/meta.json` and the link in `content/academy/index.mdx:22` both read `Building Datasets` (uppercase D). A reader sees "Building datasets" on the card and then "Building Datasets" on the destination page. Suggest changing this to `title="3. Building Datasets"` to match.
Extended reasoning...
What the bug is
The new AI Engineering Loop page introduces five cards linking to the loop sub-sections. Four of those cards have single-word titles (Tracing, Monitoring, Experimenting, Evaluating) where casing is unambiguous. The one multi-word card — the third one, for datasets — uses sentence case on the card but Title Case everywhere else in the same PR.
The specific code path
- `content/academy/ai-engineering-loop.mdx:47` → `title="3. Building datasets"` (lowercase d).
- `content/academy/datasets/meta.json` → `"title": "Building Datasets"` (uppercase D). This drives the sidebar entry and the page heading.
- `content/academy/index.mdx:22` → `[Building Datasets](/academy/datasets)` (uppercase D). This is the link in the welcome page bullet list.
So the card on the loop page is the only place that uses lowercase d, and it diverges from both the sidebar label of the page it links to and the sibling link in the welcome page list.
Why existing code does not prevent it
There is no shared constant or central source of truth for these section titles — each is hand-typed. Nothing in the build will warn when a card label diverges from its target page title. The other four cards happen to be safe because their titles are single words.
Step-by-step proof
1. Reader opens `/academy/ai-engineering-loop` and sees five cards. The third reads "3. Building datasets".
2. Reader clicks the card, which navigates to `/academy/datasets`.
3. fumadocs renders the page using `content/academy/datasets/meta.json`, whose `title` is `"Building Datasets"`.
4. The destination page header and sidebar entry both display "Building Datasets" — different casing from the card the reader just clicked.
5. Separately, on `/academy` (the index page), the bullet list at line 22 shows `Building Datasets`, also disagreeing with the loop card.
Impact
Purely cosmetic — no broken navigation, no functional issue. But it is the only multi-word card and the only one whose label disagrees with its destination, so the inconsistency is asymmetric and stands out.
Fix
Change `content/academy/ai-engineering-loop.mdx:47` from `title="3. Building datasets"` to `title="3. Building Datasets"` to align with `datasets/meta.json` and `index.mdx`.
Disclaimer: Experimental PR review
Greptile Summary
This PR rewrites the Langfuse Academy welcome section:
`index.mdx` is streamlined with clearer framing and the old TODO callouts are removed, a new `ai-engineering-loop.mdx` page is added as the conceptual entry point for the lifecycle, and three `meta.json` files are updated to use gerund-form titles ("Building Datasets", "Experimenting", "Evaluating") for consistency.

Confidence Score: 5/5
Safe to merge — documentation-only changes with no broken links or rendering issues.
All changed files are MDX content and JSON navigation config. All internal links in the new page resolve to existing Academy sections. The only findings are minor whitespace style issues (P2). No P0 or P1 issues found.
No files require special attention.
Important Files Changed
Flowchart
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
  A([Academy Index\nindex.mdx]) --> B([AI Engineering Loop\nai-engineering-loop.mdx])
  B --> C([Tracing\n/academy/tracing])
  B --> D([Monitoring\n/academy/monitoring])
  B --> E([Building Datasets\n/academy/datasets])
  B --> F([Experimenting\n/academy/experiments])
  B --> G([Evaluating\n/academy/evaluate])
  A --> C
  A --> D
  A --> E
  A --> F
  A --> G
```

Comments Outside Diff (2)
`content/academy/monitoring/overview.mdx`, lines 598-602: This `<Callout>` block contains raw internal working notes that will render as visible content for site visitors. The `IMPORTANT:` annotation and the "Add more details…" instruction are author reminders, not reader-facing content. The same pattern appears across several other Academy pages (e.g. `datasets/overview.mdx` has three TODO callouts, `evaluate/overview.mdx` has three, `tracing/overview.mdx` has three, `experiments/overview.mdx` has two). All of these will be publicly visible as rendered callout components until they are resolved or removed.
`content/academy/monitoring/overview.mdx`, line 608: `/academy/error-analysis` is linked here (and referenced inline at line 576 as "covered in depth in error analysis"), but no `content/academy/error-analysis/` directory or MDX file exists in this PR and the topic does not appear in `content/academy/meta.json`. Clicking this link will produce a 404 via the `AcademyNotFound` component.
Reviews (2): Last reviewed commit: "Merge branch 'langfuse-academy-research'..."